Semantic Communications for Image Recovery and Classification via Deep Joint Source and Channel Coding
With recent advancements in edge artificial intelligence (AI), future sixth-generation (6G) networks need to support new AI tasks such as classification and clustering in addition to data recovery. Motivated by the success of deep learning, semantic-aware and task-oriented communications with deep joint source and channel coding (JSCC) have emerged as a paradigm shift in 6G away from conventional data-oriented communications with separate source and channel coding (SSCC). However, most existing works have focused on deep JSCC designs for a single task, either data recovery or AI task execution, which cannot be transferred to other, unintended tasks. By contrast, this paper investigates JSCC semantic communications that support multi-task services by performing image data recovery and classification simultaneously. First, we propose a new end-to-end deep JSCC framework that unifies coding rate reduction maximization and mean square error (MSE) minimization in the loss function. Here, maximizing the coding rate reduction encourages the learning of discriminative features so that classification can be performed directly in the feature space, while minimizing the MSE encourages the learning of informative features for high-quality image data recovery. Next, to further improve robustness against varying wireless channels, we propose a new gated deep JSCC design, in which a gating network is incorporated to adaptively prune the output features and adjust their dimensions according to channel conditions. Finally, we present extensive numerical experiments that validate the performance of our proposed deep JSCC designs against various benchmark schemes.
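The channel-adaptive gating idea can be sketched as follows. This is a minimal illustration, not the paper's actual design: the mapping from SNR to a keep ratio, the dimension count, and the function name are all assumptions here, whereas the paper learns the gate end-to-end.

```python
import numpy as np

def gated_prune(features, snr_db, min_keep=0.25):
    """Adaptively prune encoder output features based on channel SNR.

    Hypothetical sketch: at high SNR most feature dimensions are kept;
    at low SNR only the leading dimensions survive, reducing the
    number of transmitted symbols.
    """
    d = features.shape[-1]
    # Map SNR in [0, 20] dB linearly to a keep ratio in [min_keep, 1.0].
    ratio = min_keep + (1.0 - min_keep) * np.clip(snr_db / 20.0, 0.0, 1.0)
    keep = max(1, int(round(ratio * d)))
    mask = np.zeros(d)
    mask[:keep] = 1.0          # keep the leading `keep` dimensions
    return features * mask, keep

rng = np.random.default_rng(0)
z = rng.standard_normal(16)                 # encoder output features
_, keep_hi = gated_prune(z, snr_db=20.0)    # good channel: all 16 kept
_, keep_lo = gated_prune(z, snr_db=0.0)     # poor channel: only 4 kept
```

The design choice being illustrated is that feature dimensionality, and hence bandwidth cost, becomes a function of channel quality rather than a fixed hyperparameter.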
UniMa at SemEval-2018 Task 7: semantic relation extraction and classification from scientific publications
Large repositories of scientific literature call for the development of robust methods to extract information from scholarly papers. This problem is addressed by the SemEval 2018 Task 7 on extracting and classifying relations found within scientific publications. In this paper, we present a feature-based and a deep learning-based approach to the task and discuss the results of the system runs that we submitted for evaluation.
Pushing AI to Wireless Network Edge: An Overview on Integrated Sensing, Communication, and Computation towards 6G
Pushing artificial intelligence (AI) from the central cloud to the network edge has reached broad consensus in both industry and academia as the way to materialize the vision of the artificial intelligence of things (AIoT) in the sixth-generation (6G) era. This gives rise to an emerging research area known as edge intelligence, which concerns the distillation of human-like intelligence from the huge amounts of data scattered at the wireless network edge. In general, realizing edge intelligence corresponds to the processes of sensing, communication, and computation, which are coupled ingredients for data generation, exchange, and processing, respectively. However, conventional wireless networks design sensing, communication, and computation separately, in a task-agnostic manner, and thus struggle to accommodate the stringent demands for ultra-low latency, ultra-high reliability, and high capacity in emerging AI applications such as autonomous driving. This prompts a new design paradigm of seamlessly integrated sensing, communication, and computation (ISCC) in a task-oriented manner, which comprehensively accounts for how the data are used in downstream AI applications. In view of the growing interest in this area, this article provides a timely overview of ISCC for edge intelligence by introducing its basic concept, design challenges, and enabling techniques, surveying the state-of-the-art development, and shedding light on the road ahead.
An Overview on IEEE 802.11bf: WLAN Sensing
With recent advancements, wireless local area network (WLAN), or wireless fidelity (Wi-Fi), technology has been successfully utilized to realize sensing functionalities such as detection, localization, and recognition. However, WLAN standards were developed mainly for the purpose of communication, and thus may not meet the stringent requirements of emerging sensing applications. To resolve this issue, a new Task Group (TG), namely IEEE 802.11bf, has been established by the IEEE 802.11 working group, with the objective of creating a new amendment to the WLAN standard that meets advanced sensing requirements while minimizing the effect on communications. This paper provides a comprehensive overview of the up-to-date efforts in the IEEE 802.11bf TG. First, we introduce the definition of the 802.11bf amendment and its formation and standardization timeline. Next, we discuss the WLAN sensing use cases with their corresponding key performance indicator (KPI) requirements. After reviewing previous WLAN sensing research based on communication-oriented WLAN standards, we identify its limitations and underscore the practical need for the new sensing-oriented amendment in 802.11bf. Furthermore, we discuss the WLAN sensing framework and the procedure used for measurement acquisition, considering both sensing at sub-7 GHz and directional multi-gigabit (DMG) sensing at 60 GHz, and address their shared features, similarities, and differences. In addition, we present various candidate technical features for IEEE 802.11bf, including waveform/sequence design, feedback types, and quantization and compression techniques. We also describe the methodologies and channel modeling used by the IEEE 802.11bf TG for evaluation. Finally, we discuss in detail the challenges and future research directions to motivate further research endeavors in this field.
Comment: 31 pages, 25 figures; this is a significantly updated version of arXiv:2207.0485
Application of Angiotensin Receptor–Neprilysin Inhibitor in Chronic Kidney Disease Patients: Chinese Expert Consensus
Chronic kidney disease (CKD) is a global public health problem, and cardiovascular disease is the most common cause of death in patients with CKD. The incidence and prevalence of cardiovascular events during the early stages of CKD increase significantly with declining renal function. More than 50% of dialysis patients die from cardiovascular disease, including coronary heart disease, heart failure, arrhythmia, and sudden cardiac death. Therefore, developing effective methods to control risk factors and improve prognosis is the primary focus during the diagnosis and treatment of CKD. For example, the SPRINT study demonstrated that controlling blood pressure effectively reduces cardiovascular and cerebrovascular events. Uncontrolled blood pressure not only increases the risk of these events but also accelerates the progression of CKD. A co-crystal complex of sacubitril, a neprilysin inhibitor, and valsartan, an angiotensin receptor blocker, has the potential to be widely used against CKD. Sacubitril inhibits neprilysin, which reduces the degradation of natriuretic peptides and thereby enhances the beneficial effects of the natriuretic peptide system. Meanwhile, valsartan blocks the angiotensin II type 1 (AT1) receptor and therefore inhibits the renin–angiotensin–aldosterone system. The two components act synergistically to relax blood vessels, prevent and reverse cardiovascular remodeling, and promote natriuresis. Recent studies have repeatedly confirmed that sacubitril/valsartan, the first and so far only angiotensin receptor–neprilysin inhibitor (ARNI), can reduce blood pressure more effectively than renin–angiotensin system inhibitors and improve the prognosis of heart failure in patients with CKD. Here, we propose clinical recommendations based on an expert consensus to guide ARNI-based therapeutics and reduce the occurrence of cardiovascular events in patients with CKD.
A digital mask to safeguard patient privacy
Acknowledgements: This study was supported by the Science and Technology Planning Projects of Guangdong Province (2018B010109008 to H.L.), the National Key R&D Program of China (2018YFA0704000 to F.X.), the National Natural Science Foundation of China (82171035 and 81770967 to H.L. and 62088102 to Q.D.), Beijing Natural Science Foundation (JQ19015 to F.X.), Guangzhou Key Laboratory Project (202002010006 to H.L.), the Institute for Brain and Cognitive Science, Tsinghua University (to Q.D.), Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission (to Q.D.) and Hainan Province Clinical Medical Center (H.L.). These sponsors and funding organizations had no role in the design or performance of this study. P.Y.W.M. is supported by an Advanced Fellowship Award (NIHR301696) from the UK National Institute of Health Research (NIHR). P.Y.W.M. also receives funding from Fight for Sight (UK), the Isaac Newton Trust (UK), Moorfields Eye Charity (GR001376), the Addenbrooke’s Charitable Trust, the National Eye Research Center (UK), the International Foundation for Optic Nerve Disease, the NIHR as part of the Rare Diseases Translational Research Collaboration, the NIHR Cambridge Biomedical Research Center (BRC-1215-20014) and the NIHR Biomedical Research Center based at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology. 
The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
Funder: Science and Technology Planning Projects of Guangdong Province (2018B010109008); Guangzhou Key Laboratory Project (202002010006); Hainan Province Clinical Medical Center. Funder: Fight for Sight (UK), the Isaac Newton Trust (UK), Moorfields Eye Charity (GR001376), the Addenbrooke's Charitable Trust, the National Eye Research Centre (UK), the International Foundation for Optic Nerve Disease (IFOND), the NIHR as part of the Rare Diseases Translational Research Collaboration, the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014), and the NIHR Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology. Funder: the National Key R&D Program of China (2018YFA0704000); Beijing Natural Science Foundation (JQ19015). Funder: the Institute for Brain and Cognitive Science, Tsinghua University (THUIBCS); Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission (BLBCI).
The storage of facial images in medical records poses privacy risks due to the sensitive nature of the personal biometric information that can be extracted from such images. To minimize these risks, we developed a new technology, called the digital mask (DM), which is based on three-dimensional reconstruction and deep-learning algorithms to irreversibly erase identifiable features while retaining the disease-relevant features needed for diagnosis. In a prospective clinical study evaluating the technology for the diagnosis of ocular conditions, we found very high diagnostic consistency between the use of original and reconstructed facial videos (κ ≥ 0.845 for strabismus, ptosis and nystagmus, and κ = 0.801 for thyroid-associated orbitopathy) and comparable diagnostic accuracy (P ≥ 0.131 for all ocular conditions tested).
Identity removal validation using multiple-choice questions showed that, compared with image cropping, the DM removes identity attributes from facial images much more effectively. We further confirmed the ability of the DM to evade recognition systems that use artificial intelligence-powered re-identification algorithms. Moreover, use of the DM increased the willingness of patients with ocular conditions to provide their facial images as health information during medical treatment. These results indicate the potential of the DM algorithm to protect the privacy of patients' facial images in an era of rapid adoption of digital health technologies.
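The agreement statistic reported above (κ) is Cohen's kappa, which measures inter-rater agreement beyond what chance alone would produce. A minimal sketch on made-up ratings (not the study's data) is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement rate and p_e the agreement expected if the two
    raters assigned labels independently with their observed frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    p_e = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two clinicians grading 10 videos (1 = condition present).
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
kappa = cohens_kappa(a, b)  # 8/10 raw agreement, kappa = 0.6
```

Note how kappa (0.6) is lower than the raw agreement rate (0.8): it discounts the agreement the two raters would reach by chance, which is why values above 0.8, as in the study, indicate very strong consistency.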